
    Microphone array processing for parametric spatial audio techniques

    Reproduction of spatial properties of recorded sound scenes is increasingly recognised as a crucial element of all emerging immersive applications, with domestic or cinema-oriented audiovisual reproduction for entertainment, telepresence and immersive teleconferencing, and augmented and virtual reality being key examples. Such applications benefit from a general spatial audio processing framework, able to exploit spatial information from a variety of recording formats in order to reproduce the original sound scene in a perceptually transparent way. Directional Audio Coding (DirAC) is a recent parametric spatial sound reproduction method that fulfils many of the requirements of such a framework. It is based on a universal 3D audio format known as B-format and achieves flexible and effective perceptual reproduction for loudspeakers or headphones. Part of this work focuses on the model of DirAC and aims to extend it. Firstly, it is shown that by taking into account information about the four-channel recording array that generates the B-format signals, it is possible to improve both analysis of the sound scene and reproduction. Secondly, these findings are generalised for various recording configurations. A further generalisation of DirAC is attempted in a spatial transform domain, the spherical harmonic domain (SHD), with higher-order B-format signals. Formulating the DirAC model in the SHD combines the perceptual effectiveness of DirAC with the increased resolution of higher-order B-format and overcomes most limitations of traditional DirAC. Some novel applications of parametric processing of spatial sound are demonstrated for sound and music engineering. The first shows the potential of modifying the spatial information in the recording for creative manipulation of sound scenes, while the second shows improvement of music reproduction captured with established surround recording techniques. The effectiveness of parametric techniques in conveying distance and externalisation cues over headphones led to research on controlling the perceived distance using loudspeakers in a room. This is achieved by manipulating the direct-to-reverberant energy ratio using a compact loudspeaker array with a variable directivity pattern. Lastly, apart from reproduction of recorded sound scenes, auralisation of the spatial properties of acoustical spaces is of interest. We demonstrate that this problem is well suited to parametric spatial analysis. The nature of room impulse responses captured with a large microphone array allows very high-resolution approaches, and such approaches for detection and localisation of multiple reflections in a single short observation window are applied and compared.
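
    As context for the parametric model discussed above, the sketch below shows a standard first-order DirAC-style analysis: a per-bin direction of arrival derived from the active-intensity vector and a diffuseness estimate, computed from B-format signals W, X, Y, Z. The scale factors assume traditional (FuMa-style) channel weighting and the statistics would normally be time-averaged; this is an illustrative assumption, not the thesis' exact formulation.

```python
# Minimal sketch of first-order DirAC-style analysis from B-format STFT signals.
import numpy as np

def dirac_analysis(W, X, Y, Z, eps=1e-12):
    """W, X, Y, Z: complex STFT arrays of shape (frames, bins)."""
    # Re{W* . [X, Y, Z]} is proportional to the active intensity (up to sign
    # and scale) and points toward the source with standard B-format patterns
    ia = np.stack([np.real(np.conj(W) * X),
                   np.real(np.conj(W) * Y),
                   np.real(np.conj(W) * Z)], axis=-1)
    azimuth = np.arctan2(ia[..., 1], ia[..., 0])
    elevation = np.arctan2(ia[..., 2],
                           np.linalg.norm(ia[..., :2], axis=-1) + eps)
    # Energy density and diffuseness: ~0 for a single plane wave,
    # approaching 1 for a perfectly diffuse field
    energy = 0.5 * (np.abs(W) ** 2
                    + 0.5 * (np.abs(X) ** 2 + np.abs(Y) ** 2 + np.abs(Z) ** 2))
    diffuseness = 1.0 - np.linalg.norm(ia, axis=-1) / (np.sqrt(2) * energy + eps)
    return azimuth, elevation, np.clip(diffuseness, 0.0, 1.0)
```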

    Multi-Channel Masking with Learnable Filterbank for Sound Source Separation

    This work proposes a learnable filterbank based on a multi-channel masking framework for multi-channel source separation. The learnable filterbank is a 1D Conv layer, which transforms the raw waveform into a 2D representation. In contrast to the conventional single-channel masking method, we estimate a mask for each individual microphone channel. The estimated masks are then applied to the transformed waveform representation, as in the traditional filter-and-sum beamforming operation. Specifically, each mask multiplies the corresponding channel's 2D representation, and the masked outputs of all channels are then summed. Finally, a 1D transposed Conv layer converts the summed masked signal back to the waveform domain. The experimental results show that our method outperforms single-channel masking with a learnable filterbank and can outperform multi-channel complex masking with the STFT complex spectrum in the STGCSEN model if the learnable filterbank is transformed to a higher feature dimension. The spatial response analysis also verifies that multi-channel masking in the learnable filterbank domain has spatial selectivity.
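
    The PyTorch sketch below illustrates the described pipeline: a shared learnable 1D-conv filterbank encodes each microphone channel, one mask per channel is estimated and applied, the masked representations are summed (filter-and-sum style), and a transposed convolution decodes back to a waveform. The layer sizes and the simple mask estimator are placeholder assumptions, not the paper's actual network.

```python
import torch
import torch.nn as nn

class MultiChannelMasking(nn.Module):
    def __init__(self, n_mics=4, n_filters=256, kernel_size=16, stride=8):
        super().__init__()
        # Learnable analysis/synthesis filterbanks (1D conv / transposed conv)
        self.encoder = nn.Conv1d(1, n_filters, kernel_size, stride=stride, bias=False)
        self.decoder = nn.ConvTranspose1d(n_filters, 1, kernel_size, stride=stride, bias=False)
        # Placeholder mask estimator: one mask per microphone channel
        self.mask_net = nn.Sequential(
            nn.Conv1d(n_mics * n_filters, n_mics * n_filters, 1),
            nn.Sigmoid(),
        )
        self.n_mics, self.n_filters = n_mics, n_filters

    def forward(self, x):                                    # x: (batch, mics, samples)
        B, M, T = x.shape
        feats = self.encoder(x.reshape(B * M, 1, T))         # (B*M, filters, frames)
        feats = feats.reshape(B, M, self.n_filters, -1)
        masks = self.mask_net(feats.reshape(B, M * self.n_filters, -1))
        masks = masks.reshape(B, M, self.n_filters, -1)
        summed = (masks * feats).sum(dim=1)                  # filter-and-sum over mics
        return self.decoder(summed)                          # (batch, 1, samples)
```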

    Localization, Detection and Tracking of Multiple Moving Sound Sources with a Convolutional Recurrent Neural Network

    This paper investigates the joint localization, detection, and tracking of sound events using a convolutional recurrent neural network (CRNN). We use a CRNN previously proposed for the localization and detection of stationary sources, and show that the recurrent layers enable the spatial tracking of moving sources when trained with dynamic scenes. The tracking performance of the CRNN is compared with a stand-alone tracking method that combines a multi-source direction-of-arrival (DOA) estimator and a particle filter. Their respective performance is evaluated in various acoustic conditions such as anechoic and reverberant scenarios, stationary and moving sources at several angular velocities, and with a varying number of overlapping sources. The results show that the CRNN manages to track multiple sources more consistently than the parametric method across acoustic scenarios, but at the cost of higher localization error.
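
    A minimal sketch of a CRNN of the kind described above: convolutional blocks over a multi-channel time-frequency input, recurrent layers that carry the temporal context linked here to tracking of moving sources, and separate detection and DOA output branches. All dimensions and the two-branch output format are assumptions for illustration, not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

class SELDCRNN(nn.Module):
    def __init__(self, n_channels=8, n_freq=64, n_classes=11, rnn_size=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_channels, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 4)),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((1, 4)),
        )
        # Recurrent layers provide the temporal context across frames
        self.gru = nn.GRU(64 * (n_freq // 16), rnn_size, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.sed_head = nn.Linear(2 * rnn_size, n_classes)       # event activity
        self.doa_head = nn.Linear(2 * rnn_size, 3 * n_classes)   # x, y, z per class

    def forward(self, x):                 # x: (batch, channels, time, freq)
        h = self.conv(x)                  # (batch, 64, time, freq // 16)
        B, C, T, F = h.shape
        h = h.permute(0, 2, 1, 3).reshape(B, T, C * F)
        h, _ = self.gru(h)
        return torch.sigmoid(self.sed_head(h)), torch.tanh(self.doa_head(h))
```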

    Filter design for real-time ambisonics encoding during wave-based acoustic simulations

    The ambisonics format is a powerful audio tool designed for spatial encoding of the pressure field. An under-exploited feature of this format is that it can be extracted directly from virtual acoustics simulations. Finite Difference Time Domain (FDTD) simulations are particularly well suited, as they greatly simplify the problem of extracting spatially encoded signals and enable real-time processing of the simulated pressure field. In this short contribution, we first write a time-domain representation of the ambisonics channels, in terms of spatial derivatives of the acoustic field at the receiver location, formulated as a set of ordinary differential equations. We show that, in general, the natural corresponding discrete recursive integration yields a prohibitive polynomial drift in time. We then describe a real-time filtering strategy which stabilizes this numerical integration; in the discrete-time setting of FDTD simulations, this real-time filtering process has very low computational cost, avoiding the latency associated with the large convolutions and frequency-domain block processing of previous approaches.
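
    The sketch below illustrates the underlying drift issue rather than the paper's filter design: the linearised Euler equation gives du/dt = -(1/rho) * grad(p), so a first-order directional channel is a running time integral of a spatial pressure difference at the receiver. A plain accumulator drifts when the gradient carries any bias, while a leaky integrator (one-pole regularisation, leak factor assumed here) stays bounded.

```python
import numpy as np

def integrate_gradient(grad_p, dt, rho=1.204, leak=0.0):
    """grad_p: per-sample spatial pressure difference at the receiver."""
    v = np.zeros_like(grad_p)
    acc = 0.0
    for n, g in enumerate(grad_p):
        acc = (1.0 - leak) * acc - (dt / rho) * g   # recursive integration step
        v[n] = acc
    return v

# A small constant bias in the gradient (standing in for numerical error) makes
# the plain integrator grow linearly in time; the leaky version stays bounded.
dt = 1.0 / 48000.0
grad = 1e-3 * np.ones(48000)                        # constant bias, 1 s of samples
drifting = integrate_gradient(grad, dt, leak=0.0)
bounded = integrate_gradient(grad, dt, leak=1e-3)
print(abs(drifting[-1]), abs(bounded[-1]))          # drifting magnitude >> bounded
```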

    Position tracking of a varying number of sound sources with sliding permutation invariant training

    Recent data- and learning-based sound source localization (SSL) methods have shown strong performance in challenging acoustic scenarios. However, little work has been done on adapting such methods to consistently track multiple sources that appear and disappear, as would occur in reality. In this paper, we present a new training strategy for deep learning SSL models, with a straightforward implementation based on the mean squared error of the optimal association between estimated and reference positions in the preceding time frames. It optimizes the desired properties of a tracking system: handling a time-varying number of sources and ordering localization estimates according to their trajectories, minimizing identity switches (IDSs). Evaluation on simulated data of multiple reverberant moving sources, and on two model architectures, demonstrates its effectiveness in reducing identity switches without compromising frame-wise localization accuracy. (Accepted for publication at the 31st European Signal Processing Conference, EUSIPCO 2023.)
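
    The sketch below shows a sliding permutation-invariant loss in the spirit described above: the permutation that best associates estimates with references is found over the preceding frames (here via the Hungarian algorithm) and then used for the current frame's MSE, so the ordering of estimates stays consistent over time. The window length, array shapes, and cost details are assumptions, not the paper's exact loss.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sliding_pit_mse(est, ref, t, window=10):
    """est, ref: arrays of shape (frames, sources, 3) with Cartesian DOAs."""
    start = max(0, t - window)
    # Pairwise cost accumulated over the preceding frames
    cost = np.zeros((est.shape[1], ref.shape[1]))
    for i in range(est.shape[1]):
        for j in range(ref.shape[1]):
            cost[i, j] = np.mean((est[start:t + 1, i] - ref[start:t + 1, j]) ** 2)
    rows, cols = linear_sum_assignment(cost)          # optimal association
    return np.mean((est[t, rows] - ref[t, cols]) ** 2)
```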

    Deep Neural Network Based Low-Latency Speech Separation with Asymmetric Analysis-Synthesis Window Pair

    Time-frequency masking or spectrum prediction computed via short symmetric windows is commonly used in low-latency deep neural network (DNN) based source separation. In this paper, we propose the use of an asymmetric analysis-synthesis window pair, which allows training with targets of better frequency resolution while retaining low latency during inference, suitable for real-time speech enhancement or assisted hearing applications. In order to assess our approach across various model types and datasets, we evaluate it with both a speaker-independent deep clustering (DC) model and a speaker-dependent mask inference (MI) model. We report an improvement in separation performance of up to 1.5 dB in terms of source-to-distortion ratio (SDR) while maintaining an algorithmic latency of 8 ms.
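
    The sketch below illustrates the latency idea only: the analysis window is long (better frequency resolution for the training targets), while the synthesis window is confined to its last few milliseconds, so under the usual weighted overlap-add convention the algorithmic latency is governed by the short synthesis part rather than the long analysis window. The particular window construction (a long rising Hann half followed by a short falling Hann half) and the sizes are assumptions for illustration, not the paper's design.

```python
import numpy as np

fs = 16000
n_analysis, n_synthesis, hop = 1024, 128, 64           # samples (assumed sizes)

# Asymmetric analysis window: slow rise, fast fall concentrated at the end
rise = np.hanning(2 * (n_analysis - n_synthesis // 2))[: n_analysis - n_synthesis // 2]
fall = np.hanning(n_synthesis)[n_synthesis // 2:]
analysis_win = np.concatenate([rise, fall])             # length n_analysis

# Synthesis window: non-zero only over the last n_synthesis samples of the frame
synthesis_win = np.zeros(n_analysis)
synthesis_win[-n_synthesis:] = np.hanning(n_synthesis)

latency_ms = 1000.0 * n_synthesis / fs                  # governed by synthesis part
print(f"algorithmic latency ~ {latency_ms:.1f} ms")     # 8 ms with these sizes
```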

    Near-field evaluation of reproducible speech sources

    The spatial speech reproduction capabilities of a KEMAR mouth simulator, a loudspeaker, the piston-on-a-sphere model, and a circular harmonic fitting are evaluated in the near field. The speech directivity of 24 human subjects, both male and female, is measured using a semicircular microphone array with a radius of 36.5 cm in the horizontal plane. Impulse responses are captured for the two devices, and filters are generated for the two numerical models to emulate their directional effect on speech reproduction. The four repeatable speech sources are evaluated through comparison to the recorded human speech both objectively, through directivity pattern and spectral magnitude differences, and subjectively, through a listening test on perceived coloration. Results show that the repeatable sources perform relatively well under the metric of directivity, but irregularities in their directivity patterns introduce audible coloration for off-axis directions.
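
    As a small illustration of one plausible objective comparison of the kind mentioned above, the sketch below compares the magnitude directivity pattern of a candidate source against the averaged measured human pattern, per microphone angle and frequency bin, after normalising to the on-axis response. The dB-deviation metric and array shapes are assumptions for illustration, not the paper's exact evaluation.

```python
import numpy as np

def directivity_error_db(candidate, human_mean, on_axis_idx=0, eps=1e-12):
    """candidate, human_mean: magnitude responses of shape (angles, freq_bins)."""
    # Normalise each pattern to its on-axis response so only the *shape*
    # of the directivity is compared, not the overall level
    cand = candidate / (candidate[on_axis_idx] + eps)
    ref = human_mean / (human_mean[on_axis_idx] + eps)
    diff_db = 20.0 * np.log10((cand + eps) / (ref + eps))
    # Mean absolute deviation per angle, averaged over frequency
    return np.mean(np.abs(diff_db), axis=1)
```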